
    Smartphone-based food diagnostic technologies: A review

    A new generation of mobile sensing approaches offers significant advantages over traditional platforms in terms of test speed, control, low cost, ease of operation, and data management, and requires minimal equipment and user involvement. The marriage of novel sensing technologies with cellphones enables the development of powerful lab-on-smartphone platforms for many important applications, including medical diagnosis, environmental monitoring, and food safety analysis. This paper reviews recent advancements in the field of smartphone-based food diagnostic technologies, with an emphasis on custom modules that enhance smartphone sensing capabilities. These devices typically comprise multiple components, such as detectors, sample processors, disposable chips, batteries, and software, which are integrated with a commercial smartphone. One of the most important aspects of developing these systems is the integration of these components onto a compact and lightweight platform that requires minimal power. To date, researchers have demonstrated several promising approaches employing various sensing techniques and device configurations. We aim to provide a systematic classification according to the detection strategy, with a critical discussion of strengths and weaknesses. We also extend the analysis to the food-scanning devices that are increasingly populating the Internet of Things (IoT) market, showing that this field is indeed promising, as research outputs are quickly capitalized on by new start-up companies.

    A neural network approach to human posture classification and fall detection using RGB-D camera

    In this paper, we describe a human posture classification and fall detection module suitable for smart homes and assisted living solutions. The system uses a neural network that processes the human joints produced by a skeleton tracker from the depth streams of an RGB-D sensor. The neural network recognizes standing, sitting, and lying postures. Because it uses only the depth maps from the sensor, the system can work in poor lighting conditions and guarantees the privacy of the person. The neural network is trained on a dataset produced with the Kinect tracker, but it is also tested with a different human tracker (NiTE). In particular, the aim of this work is to analyse the behaviour of the neural network even when the positions of the extracted joints are not reliable and the provided skeleton is confused. Real-time tests have been carried out covering the whole operative range of the sensor (up to 3.5 m). Experimental results show an overall accuracy of 98.3% using the NiTE tracker for the fall tests, with a worst-case accuracy of 97.5%.
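The classifier described above can be sketched in a few lines. This is a hedged illustration, not the paper's pipeline: the joint names, the two hand-picked features (head height and torso verticality), and the synthetic training data are all invented here, and scikit-learn's MLPClassifier stands in for the unspecified network.

```python
# Illustrative sketch: classify standing/sitting/lying from skeleton joints.
# Joint names, features, and synthetic data are assumptions for this example.
import numpy as np
from sklearn.neural_network import MLPClassifier

def posture_features(joints):
    """joints: dict of joint name -> (x, y, z) in metres (y = height).
    Returns two coarse features: head height and torso verticality."""
    head_y = joints["head"][1]
    torso = np.subtract(joints["neck"], joints["torso"])
    verticality = abs(torso[1]) / (np.linalg.norm(torso) + 1e-9)
    return [head_y, verticality]

def make_synthetic_dataset(n=100, seed=0):
    """Three well-separated clusters standing in for labelled tracker output."""
    rng = np.random.default_rng(seed)
    X, y = [], []
    for label, (head_y, vert) in enumerate([(1.6, 0.95),   # 0 = standing
                                            (1.1, 0.90),   # 1 = sitting
                                            (0.3, 0.10)]): # 2 = lying
        for _ in range(n):
            X.append([head_y + rng.normal(0, 0.05),
                      vert + rng.normal(0, 0.02)])
            y.append(label)
    return np.array(X), np.array(y)

X, y = make_synthetic_dataset()
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, y)
```

In the paper's setting the features would instead come from the full set of tracked joints, which is what makes the network robust to a confused skeleton; the two-feature version here is only meant to show the shape of the approach.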

    An Ambient Assisted Living Approach in Designing Domiciliary Services Combined With Innovative Technologies for Patients With Alzheimer’s Disease: A Case Study

    Background: Alzheimer’s disease (AD) is one of the most disabling diseases to affect large numbers of elderly people worldwide. Because of the characteristics of this disease, patients with AD require daily assistance from service providers both in nursing homes and at home. Domiciliary assistance has been demonstrated to be cost-effective and efficient in the first phase of the disease, helping to slow down the course of the illness, improve the quality of life and care, and extend independence for patients and caregivers. In this context, the aim of this work is to demonstrate the technical effectiveness and acceptability of an innovative domiciliary smart sensor system for providing domiciliary assistance to patients with AD, which has been developed with an Ambient Assisted Living (AAL) approach. Methods: The design, development, testing, and evaluation of the innovative technological solution were performed by a multidisciplinary team. In all, 15 sociomedical operators and 14 patients with AD were directly involved in defining the end-users’ needs and requirements, identifying design principles with acceptability and usability features, and evaluating the technological solutions before and after the real experimentation. Results: A modular technological system was produced to help caregivers continuously monitor the health status, safety, and daily activities of patients with AD. During the experimentation, the acceptability, utility, usability, and efficacy of this system were evaluated as quite positive. Conclusion: The experience described in this article demonstrates that AAL technologies are now feasible and effective and can be actively used in assisting patients with AD in their homes.
The extensive involvement of caregivers in the experimentation allowed us to establish that use of the technological system brings a proven improvement in care performance and in the efficiency of care provision by both formal and informal caregivers, and consequently an increase in the quality of life of patients, their relatives, and their caregivers.

    A Human Activity Recognition System Based on Dynamic Clustering of Skeleton Data

    Human activity recognition is an important area in computer vision, with a wide range of applications including ambient assisted living. In this paper, an activity recognition system based on skeleton data extracted from a depth camera is presented. The system uses machine learning techniques to classify actions that are described by a small set of basic postures. The training phase creates several models, related to the number of clustered postures, by means of a multiclass Support Vector Machine (SVM) trained with Sequential Minimal Optimization (SMO). The classification phase adopts the X-means algorithm to find the optimal number of clusters dynamically. The contribution of the paper is twofold. The first aim is to perform activity recognition employing features based on a small number of informative postures, extracted independently from each activity instance; the second is to assess the minimum number of frames needed for adequate classification. The system is evaluated on two publicly available datasets, the Cornell Activity Dataset (CAD-60) and the Telecommunication Systems Team (TST) Fall detection dataset. The number of clusters needed to model each instance ranges from two to four. The proposed approach achieves excellent performance using only about 4 s of input data (~100 frames) and outperforms the state of the art when it uses approximately 500 frames on the CAD-60 dataset. These results are promising for tests in real contexts.
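The key dynamic-clustering step can be sketched as follows. X-means is not available in scikit-learn, so this illustration substitutes a silhouette-score search over k = 2..4 with KMeans, which serves the same purpose of picking the cluster count per activity instance; the 2-D synthetic frames are also an assumption. In the paper, the resulting centroids (the "basic postures") would then be turned into fixed-length features for the multiclass SVM.

```python
# Sketch of per-instance dynamic posture clustering. A silhouette-score
# search over KMeans stands in for X-means; the data is synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def cluster_postures(frames, k_range=(2, 3, 4), seed=0):
    """frames: (n_frames, n_features) skeleton features for one activity
    instance. Returns the centroids of the best-scoring clustering."""
    best = None
    for k in k_range:
        km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(frames)
        score = silhouette_score(frames, km.labels_)
        if best is None or score > best[0]:
            best = (score, km.cluster_centers_)
    return best[1]

rng = np.random.default_rng(0)
# Two synthetic "activities": frames drawn around 2 vs 3 distinct postures.
act_a = np.vstack([rng.normal(c, 0.05, (30, 2)) for c in [(0, 0), (1, 1)]])
act_b = np.vstack([rng.normal(c, 0.05, (30, 2)) for c in [(0, 1), (1, 0), (2, 2)]])
centroids_a = cluster_postures(act_a)
centroids_b = cluster_postures(act_b)
```

The per-instance search mirrors the paper's finding that two to four postures suffice to model an activity instance.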

    What do humans feel with mistreated humans, animals, robots and objects? Exploring the role of cognitive empathy

    The aim of this paper is to present a study in which we compare the degree of empathy that a convenience sample of university students expressed with humans, animals, robots, and objects. The present study broadens the spectrum of empathy-eliciting elements explored previously while comparing different facets of empathy. Here we used video clips of mistreated humans, animals, robots, and objects to elicit empathic reactions and to measure attributed emotions. The use of such a broad spectrum of elements allowed us to infer the role of different features of the selected elements, specifically experience (how much the element is able to understand the events of the environment) and degree of anthropo-/zoomorphization. The results show that participants expressed empathy differently toward the various social actors being mistreated. A comparison between the present results and previous results on vicarious feelings shows that congruence between self- and other-experience did not always hold, and that it was modulated by familiarity with robotic artefacts of daily usage.

    Modeling user experience in electronic entertainment using psychophysiological measurements

    Analyses of user experience in the electronic entertainment industry currently rely on self-reporting methods, such as surveys, ratings, and focus group interviews. We argue that self-reporting alone carries inherent problems - mainly subject bias and interpretation difficulties - and therefore should not be used as the sole metric. To deal with this problem, we propose creating a model of consumer experience based on psychophysiological measurements and describe how such a model can be trained using machine learning methods. Models trained exclusively on real-time data produced by the autonomic nervous system and involuntary physiological responses are not susceptible to subjective bias, misinterpretation, or the imprecision caused by the delay between the experience and the interview. This paper proposes a potentially promising direction for future research and presents an introductory analysis of available biological data sources, their relevance to user experience modeling, and the technical prerequisites for their collection. Multiple psychophysiological measurements (such as heart rate, electrodermal activity, or respiratory activity) should be used in combination with self-reporting methods to prepare training sets for machine learning models. During our initial experiments, we collected time-series heart rate data for two computer games - Hearthstone and Dota 2. This preliminary analysis suggests the existence of a correlation between psychophysiological measurements and in-game events. Ready-to-use user experience models are out of the scope of this paper.
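The heart-rate-versus-event analysis mentioned above can be sketched on synthetic data. Everything here is an assumption made for illustration: the 1 Hz sampling, the window size, and the simulated arousal response around events; the paper's actual data and analysis are not reproduced.

```python
# Illustrative sketch (synthetic data): compare heart rate in windows
# after in-game events against the session-wide baseline.
import numpy as np

def event_response(hr, event_times, window=10):
    """hr: 1-D heart-rate samples (assumed 1 Hz). Returns (mean HR in the
    `window` samples after each event, overall mean HR)."""
    after = np.concatenate([hr[t:t + window] for t in event_times])
    return after.mean(), hr.mean()

rng = np.random.default_rng(1)
hr = 70 + rng.normal(0, 1.5, 600)   # 10 minutes of resting heart rate
events = [100, 250, 400]            # event onsets (sample indices)
for t in events:                    # simulated arousal response to events
    hr[t:t + 10] += 8
after_mean, base_mean = event_response(hr, events)
```

A window-versus-baseline comparison like this is the simplest way to surface the kind of event correlation the abstract reports; a real study would add proper statistics and per-subject normalization.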

    User indoor localisation system enhances activity recognition: A proof of concept

    Older people would like to live independently in their homes as long as possible. However, polypharmacy, physical weakness, and mental illness can increase the risk of domestic accidents (e.g. a fall). Moreover, changes in the behaviour of healthy older people can be correlated with cognitive disorders; consequently, early intervention could delay the progression of the disease. Over the last few years, activity recognition systems have been developed to support the management of senior citizens’ daily life. In this context, this paper aims to go beyond the state of the art by presenting a proof of concept in which information on body movement, vital signs, and the user’s indoor location is aggregated to improve the activity recognition task. The presented system has been tested in a realistic environment with three users in order to assess the feasibility of the proposed method. The results encourage the use of this approach in activity recognition applications; indeed, the overall accuracy values, amongst others, increase satisfactorily (+2.67% DT, +7.39% SVM, +147.37% NN).
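The benefit of fusing indoor location with movement features can be shown on a toy example. All of the specifics below are invented: the two activities, the single motion feature, and the room labels are placeholders, and an SVM stands in for the abstract's DT/SVM/NN classifiers. The point is only that two activities with near-identical body-movement signatures become separable once a room label is appended to the feature vector.

```python
# Sketch: location-aware feature fusion on synthetic data. Two low-motion
# activities ("cooking" vs "ironing", both invented) share the same motion
# feature but happen in different rooms.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 60
motion = rng.normal(0.2, 0.05, (2 * n, 1))         # same motion intensity
room = np.array([0] * n + [1] * n).reshape(-1, 1)  # e.g. kitchen vs laundry
y = np.array([0] * n + [1] * n)                    # activity labels

acc_motion = cross_val_score(SVC(), motion, y, cv=5).mean()
acc_fused = cross_val_score(SVC(), np.hstack([motion, room]), y, cv=5).mean()
```

Motion alone leaves the classifier at chance level, while the fused features separate the activities, which is the same mechanism behind the accuracy gains the abstract reports.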

    A biomechanical analysis of surgeon’s gesture in a laparoscopic virtual scenario

    Minimally invasive surgery (MIS) has become very common in recent years thanks to the many advantages it offers patients. However, because of the difficulties surgeons encounter in learning and managing this technique, several training methods and metrics have been proposed to improve surgeons' abilities and to assess their surgical skills. In this context, this paper presents a biomechanical analysis of the surgeon's movements during exercises involving instrument-tip positioning and depth perception in a laparoscopic virtual environment. The estimation of several biomechanical parameters enables us to assess the abilities of surgeons and to distinguish an expert surgeon from a novice. A segmentation algorithm has been defined to investigate the surgeon's movements in depth and to divide them into sub-movements.
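One common way to segment a tool-tip trajectory into sub-movements is to cut wherever the speed drops below a threshold. The abstract does not specify the paper's segmentation algorithm, so the sketch below, with its sampling rate and threshold, is an assumed stand-in rather than the authors' method.

```python
# Hedged sketch: speed-threshold segmentation of a tool-tip trajectory
# into sub-movements. Sampling rate and threshold are assumptions.
import numpy as np

def segment_movements(positions, dt=0.01, speed_thresh=0.05):
    """positions: (n, 3) tool-tip positions sampled every dt seconds.
    Returns a list of (start, end) sample-index pairs, one per sub-movement."""
    speed = np.linalg.norm(np.diff(positions, axis=0), axis=1) / dt
    moving = speed > speed_thresh
    segments, start = [], None
    for i, m in enumerate(moving):
        if m and start is None:
            start = i                      # movement begins
        elif not m and start is not None:
            segments.append((start, i))    # movement ends
            start = None
    if start is not None:
        segments.append((start, len(moving)))
    return segments

# Synthetic trajectory: move along x, pause, then move along y.
t1 = np.linspace(0, 0.1, 50)[:, None] * [1.0, 0.0, 0.0]
pause = np.repeat(t1[-1:], 30, axis=0)
t2 = pause[-1] + np.linspace(0, 0.1, 50)[:, None] * [0.0, 1.0, 0.0]
segs = segment_movements(np.vstack([t1, pause, t2]))
```

Per-segment statistics (duration, path length, peak speed) computed over such sub-movements are typical biomechanical parameters for separating expert from novice performance.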